Results 1 - 4 of 4
1.
7th International Extended Reality Conference, XR 2022 ; : 106-111, 2023.
Article in English | Scopus | ID: covidwho-2266966

ABSTRACT

The COVID-19 pandemic has accelerated the growing use of augmented and virtual reality in various industries, especially in the education sector. It is worth studying whether VR training applies to technology-accepting learners, i.e., whether "If you believe, you will receive" holds for VR training. In this work, the researchers developed an immersive VR interview room system that allows pre-employment learners to practise in a simulated environment. Pre-recorded interviewer questions are played so that learners get a taste of a realistic interview. The investigation examines the relationship between learners' perceived usefulness and interview self-efficacy in VR training in human resources management. The experimental results show that the two are positively correlated. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
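
The abstract does not state how the positive correlation was tested; a minimal sketch, assuming per-learner Likert-scale survey scores and a Pearson correlation test (all variable names and values below are hypothetical, not taken from the paper), could look like:

    # Minimal sketch (assumption): testing whether perceived usefulness and
    # interview self-efficacy are positively correlated, as the abstract reports.
    # The scores below are illustrative placeholders, not the study's data.
    from scipy import stats

    # Hypothetical per-learner mean Likert scores after the VR interview session.
    perceived_usefulness = [4.2, 3.8, 4.5, 3.1, 4.0, 4.7, 3.6, 4.3]
    interview_self_efficacy = [3.9, 3.5, 4.6, 3.0, 4.1, 4.8, 3.4, 4.2]

    r, p_value = stats.pearsonr(perceived_usefulness, interview_self_efficacy)
    print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")  # r > 0 indicates a positive correlation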

2.
3rd International Conference on Next Generation Computing Applications, NextComp 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2136450

ABSTRACT

This paper presents an explainable deep learning network to classify COVID from non-COVID cases based on 3D CT lung images. It applies a subset of the data from the MIA-COV19 challenge through the development of a 3D form of the Vision Transformer deep learning architecture. The data comprise 1,924 subjects, 851 of whom were diagnosed with COVID; 1,552 were selected for training and 372 for testing. While most of the data volumes are in axial view, a number of subjects' data are in coronal or sagittal views, with only 1 or 2 slices in axial view. Hence, while 3D data-based classification is investigated, 2D axial-view images remain the main focus in this competition. Two deep learning methods are studied: the vision transformer (ViT), which is based on attention models, and DenseNet, which is built upon a conventional convolutional neural network (CNN). Initial evaluation results indicate that ViT performs better than DenseNet, with F1 scores of 0.81 and 0.72, respectively. (Code is available on GitHub at https://github.com/xiaohong1/COVID-ViT.) This paper illustrates that the vision transformer performs best in comparison with other current state-of-the-art approaches to classifying COVID from CT lung images. © 2022 IEEE.
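
The repository above contains the authors' implementation; purely as an illustration of the general technique, a 2D axial-slice binary classifier using torchvision's off-the-shelf ViT-B/16 (an assumption, not the paper's 3D COVID-ViT architecture) might be sketched as:

    # Minimal sketch (assumption): a 2D axial-slice ViT classifier for
    # COVID vs non-COVID, in the spirit of the paper's approach. This uses
    # torchvision's stock ViT-B/16 as a stand-in for the authors' model.
    import torch
    from torchvision.models import vit_b_16

    model = vit_b_16(weights=None, num_classes=2)  # 2 classes: non-COVID / COVID

    # A CT slice is single-channel; replicate it to the 3 channels ViT-B/16 expects.
    ct_slice = torch.rand(1, 1, 224, 224)          # hypothetical normalised axial slice
    logits = model(ct_slice.repeat(1, 3, 1, 1))    # output shape: (1, 2)
    prediction = logits.argmax(dim=1)              # label order assumed: 0 = non-COVID, 1 = COVID
    print(prediction.item())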

3.
Hong Kong Med J ; 28(2): 188-190, 2022 04.
Article in English | MEDLINE | ID: covidwho-1918129
4.
Hong Kong Med J ; 27(3): 232-233, 2021 06.
Article in English | MEDLINE | ID: covidwho-1348808